254 research outputs found

    A Fast Image Super-Resolution Algorithm Using an Adaptive Wiener Filter

    A computationally simple super-resolution algorithm using a type of adaptive Wiener filter is proposed. The algorithm produces an improved-resolution image from a sequence of low-resolution (LR) video frames with overlapping fields of view. The algorithm uses subpixel registration to position each LR pixel value on a common spatial grid that is referenced to the average position of the input frames. The positions of the LR pixels are not quantized to a finite grid as in some previous techniques. The output high-resolution (HR) pixels are obtained using a weighted sum of LR pixels in a local moving window. Using a statistical model, the weights for each HR pixel are designed to minimize the mean squared error, and they depend on the relative positions of the surrounding LR pixels. Thus, these weights adapt spatially and temporally to changing distributions of LR pixels due to varying motion. Both a global and a spatially varying statistical model are considered here. Because the weights adapt to the distribution of LR pixels, the algorithm is robust and does not become unstable when an unfavorable distribution of LR pixels is observed. For translational motion, the algorithm has low computational complexity and may be suitable for real-time and/or near-real-time processing applications. With other motion models, the computational complexity increases significantly; however, regardless of the motion model, the algorithm lends itself to parallel implementation. The efficacy of the proposed algorithm is demonstrated here in a number of experimental results using simulated and real video sequences. A computational analysis is also presented.
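The weight design described above (minimum mean squared error weights that depend on the relative positions of nearby LR samples) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exponential autocorrelation model, the function name, and the parameter values are all assumptions.

```python
import numpy as np

def awf_weights(sample_pos, hr_pos, rho=0.75, sigma_n2=0.01):
    """MMSE (Wiener) weights for one HR pixel from nearby LR samples.

    sample_pos : (N, 2) coordinates of LR samples on the common spatial grid
    hr_pos     : (2,) coordinates of the HR pixel being estimated
    rho        : assumed exponential autocorrelation decay per unit distance
    sigma_n2   : noise variance relative to unit signal variance
    """
    # Autocorrelation between LR samples: r(d) = rho**d, noise on the diagonal
    d = np.linalg.norm(sample_pos[:, None, :] - sample_pos[None, :, :], axis=-1)
    R = rho ** d + sigma_n2 * np.eye(len(sample_pos))
    # Cross-correlation between each LR sample and the desired HR pixel
    p = rho ** np.linalg.norm(sample_pos - hr_pos, axis=-1)
    # Wiener solution: weights minimizing the mean squared error
    return np.linalg.solve(R, p)
```

Because the weights are a function of sample geometry only, they adapt automatically as the LR sample distribution changes from frame to frame.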

    Super-resolution Using Adaptive Wiener Filters

    The spatial sampling rate of an imaging system is determined by the spacing of the detectors in the focal plane array (FPA). The spatial frequencies present in the image on the focal plane are band-limited by the optics, due to diffraction through a finite aperture. To guarantee that there will be no aliasing during image acquisition, the Nyquist criterion dictates that the sampling rate must be greater than twice the cut-off frequency of the optics. However, optical designs involve a number of trade-offs, and typical imaging systems are designed with some level of aliasing. We will refer to such systems as detector limited, as opposed to optically limited. Furthermore, with or without aliasing, imaging systems invariably suffer from diffraction blur, optical aberrations, and noise. Multiframe super-resolution (SR) processing has proven successful in reducing aliasing and enhancing the resolution of images from detector-limited imaging systems.
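The Nyquist comparison above is easy to check numerically. The sketch below is illustrative (the function name and parameters are my own); it uses the standard incoherent diffraction cutoff f_c = 1/(λ·F#) and flags a design as detector limited when the FPA sampling rate falls below twice that cutoff.

```python
def is_detector_limited(pitch_um, wavelength_um, f_number):
    """Return True when the FPA undersamples the optical band limit.

    pitch_um      : detector spacing in micrometers
    wavelength_um : wavelength in micrometers
    f_number      : optical F-number
    """
    f_cutoff = 1.0 / (wavelength_um * f_number)  # optics band limit (cyc/um)
    f_sample = 1.0 / pitch_um                    # FPA sampling rate (samp/um)
    return f_sample < 2.0 * f_cutoff             # True -> aliasing possible
```

For example, a 5 um pitch visible-band sensor (0.55 um, F/4) is detector limited, while a 12 um pitch LWIR sensor at 10 um and F/4 is optically limited.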

    Scene-based nonuniformity correction with video sequences and registration

    We describe a new, to our knowledge, scene-based nonuniformity correction algorithm for array detectors. The algorithm relies on the ability to register a sequence of observed frames in the presence of the fixed-pattern noise caused by pixel-to-pixel nonuniformity. At low-to-moderate levels of nonuniformity, sufficiently accurate registration may be possible with standard scene-based registration techniques. If the registration is accurate, and motion exists between the frames, then groups of independent detectors can be identified that observe the same irradiance (or true scene value). These detector outputs are averaged to generate estimates of the true scene values. With these scene estimates, and the corresponding observed values through a given detector, a curve-fitting procedure is used to estimate the individual detector response parameters, which can then be used to correct for detector nonuniformity. The strength of the algorithm lies in its simplicity and low computational complexity. Experimental results illustrating the performance of the algorithm include the use of visible-range imagery with simulated nonuniformity and infrared imagery with real nonuniformity.
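The curve-fitting step can be sketched for the common linear detector model (observed = gain × scene + offset). This is a minimal least-squares illustration under that assumed model, not the authors' exact procedure; the function names are my own.

```python
import numpy as np

def fit_detector_response(observed, scene_estimates):
    """Least-squares fit of observed ≈ gain * scene + offset for one detector.

    observed        : values read out through this detector over many frames
    scene_estimates : averaged true-scene estimates at the same samples
    """
    A = np.column_stack([scene_estimates, np.ones_like(scene_estimates)])
    (gain, offset), *_ = np.linalg.lstsq(A, observed, rcond=None)
    return gain, offset

def correct(observed, gain, offset):
    """Invert the fitted response to remove fixed-pattern nonuniformity."""
    return (observed - offset) / gain
```

Repeating the fit per detector yields a full gain/offset correction map for the array.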

    A Computationally Efficient U-Net Architecture for Lung Segmentation in Chest Radiographs

    Lung segmentation plays a crucial role in computer-aided diagnosis using chest radiographs (CRs). We implement a U-Net architecture for lung segmentation in CRs across multiple publicly available datasets. We utilize a private dataset with 160 CRs provided by the Riverain Medical Group for training purposes. A publicly available dataset provided by the Japanese Society of Radiological Technology (JSRT) is used for testing. Active shape model-based results serve as the ground truth for both of these datasets. In addition, we study the performance of our algorithm on the publicly available Shenzhen dataset, which contains 566 CRs with manually segmented lungs (ground truth). Our overall pixel-based classification performance is about 98.3% for a set of 100 CRs in the Shenzhen dataset and 95.6% for 140 CRs in the JSRT dataset. We also achieve an intersection-over-union value of 0.95 at a computation time of 8 seconds for the entire suite of Shenzhen testing cases.
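The two reported metrics, pixel-based classification accuracy and intersection over union, can be computed from binary masks as in this minimal sketch (the function name is assumed; this is not the authors' evaluation code):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel accuracy and intersection-over-union for binary masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    accuracy = np.mean(pred == truth)          # fraction of pixels classified correctly
    inter = np.logical_and(pred, truth).sum()  # overlap area
    union = np.logical_or(pred, truth).sum()   # combined area
    iou = inter / union if union else 1.0      # empty masks count as a perfect match
    return accuracy, iou
```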

    Robust Super-resolution by Fusion of Interpolated Frames for Color and Grayscale Images

    Multi-frame super-resolution (SR) processing seeks to overcome undersampling issues that can lead to undesirable aliasing artifacts in imaging systems. A key factor in effective multi-frame SR is accurate subpixel inter-frame registration. Accurate registration is more difficult when frame-to-frame motion does not contain simple global translation and includes locally moving scene objects. SR processing is further complicated when the camera captures full color by using a Bayer color filter array (CFA). Various aspects of these SR challenges have been previously investigated. Fast SR algorithms tend to have difficulty accommodating complex motion and CFA sensors. Furthermore, methods that can tolerate these complexities tend to be iterative in nature and may not be amenable to real-time processing. In this paper, we present a new fast approach for performing SR in the presence of these challenging imaging conditions. We refer to the new approach as Fusion of Interpolated Frames (FIF) SR. The FIF SR method decouples the demosaicing, interpolation, and restoration steps to simplify the algorithm. Frames are first individually demosaiced and interpolated to the desired resolution. Next, FIF uses a novel weighted sum of the interpolated frames to fuse them into an improved-resolution estimate. Finally, restoration is applied to mitigate degrading camera effects. The proposed FIF approach has a lower computational complexity than many iterative methods, making it a candidate for real-time implementation. We provide a detailed description of the FIF SR method and show experimental results using synthetic and real datasets in both constrained and complex imaging scenarios. Experiments include airborne grayscale imagery and Bayer CFA image sets with affine background motion plus local motion.
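The fusion step, a per-pixel weighted sum of registered, interpolated frames, might be sketched as follows. The weight semantics here (e.g. higher weight where a frame's original samples lie close to the HR pixel) are an illustrative assumption, not the paper's exact weighting.

```python
import numpy as np

def fuse_interpolated_frames(frames, weights):
    """Per-pixel weighted fusion of frames already interpolated to the HR grid.

    frames  : (K, H, W) stack of registered, interpolated frames
    weights : (K, H, W) nonnegative per-frame, per-pixel weights
    """
    w = np.asarray(weights, dtype=float)
    # Normalized weighted sum; the clip guards against an all-zero weight pixel
    return (w * frames).sum(axis=0) / np.clip(w.sum(axis=0), 1e-12, None)
```

Restoration (deblurring for camera effects) would then be applied to the fused result as a separate, decoupled step.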

    Fast Super-Resolution Using an Adaptive Wiener Filter with Robustness to Local Motion

    We present a new adaptive Wiener filter (AWF) super-resolution (SR) algorithm that employs a global background motion model but is also robust to limited local motion. The AWF relies on registration to populate a common high-resolution (HR) grid with samples from several frames. A weighted sum of local samples is then used to perform nonuniform interpolation and image restoration simultaneously. To achieve accurate subpixel registration, we employ a global background motion model with relatively few parameters that can be estimated accurately. However, local motion may be present that includes moving objects, motion parallax, or other deviations from the background motion model. In our proposed robust approach, pixels from frames other than the reference that are inconsistent with the background motion model are detected and excluded from populating the HR grid. Here we propose and compare several local motion detection algorithms. We also propose a modified multiscale background registration method that incorporates pixel selection at each scale to minimize the impact of local motion. We demonstrate the efficacy of the new robust SR methods using several datasets, including airborne infrared data with moving vehicles and a ground resolution pattern for objective resolution analysis.
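A simple residual-thresholding local motion detector in the spirit described above might look like the sketch below. The MAD-based robust threshold is an illustrative choice of my own, not necessarily one of the detectors compared in the paper.

```python
import numpy as np

def detect_local_motion(frame_registered, reference, k=3.0):
    """Flag pixels inconsistent with the global background motion model.

    After a frame is registered to the reference using the background model,
    large absolute residuals indicate local motion. A pixel is flagged when
    its residual exceeds k robust standard deviations (MAD-based scale).
    """
    resid = np.abs(frame_registered.astype(float) - reference.astype(float))
    mad = np.median(np.abs(resid - np.median(resid)))  # robust spread of residuals
    sigma = 1.4826 * mad + 1e-12                       # MAD -> std-dev scale
    return resid > k * sigma                           # True -> exclude from HR grid
```

Flagged pixels would simply be withheld when populating the HR grid, so the AWF weights are computed from background-consistent samples only.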

    Rank Conditioned Rank Selection Filters for Signal Restoration

    A class of nonlinear filters called rank conditioned rank selection (RCRS) filters is developed and analyzed in this paper. The RCRS filters are developed within the general framework of rank selection (RS) filters, which are filters constrained to output an order statistic from the observation set. Many previously proposed rank-order-based filters can be formulated as RS filters. The only difference between such filters is in the information used in deciding which order statistic to output. The information used by RCRS filters is the ranks of selected input samples, hence the name rank conditioned rank selection filters. The number of input sample ranks used is referred to as the order of the RCRS filter. The order can range from zero to the number of samples in the observation window, giving the filters valuable flexibility. Low-order filters can give good performance and are relatively simple to optimize and implement. If improved performance is demanded, the order can be increased, but at the expense of filter simplicity. In this paper, many statistical and deterministic properties of the RCRS filters are presented. A procedure for optimizing over the class of RCRS filters is also presented. Finally, extensive computer simulation results that illustrate the performance of RCRS filters in comparison with other techniques in image restoration applications are presented.
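A first-order RCRS filter over a single window can be sketched directly from the definition above: the rank of one conditioning sample (here the window center) selects which order statistic to output. The selection-table representation is an assumed encoding for illustration.

```python
def rcrs_filter_1(window, selection):
    """First-order rank conditioned rank selection (RCRS) filter, one window.

    window    : list of samples; the conditioning sample is the window center
    selection : table mapping the center sample's rank to the rank of the
                order statistic to output (identity table -> identity filter;
                a constant middle rank -> median filter)
    """
    sorted_w = sorted(window)
    center = window[len(window) // 2]
    center_rank = sorted_w.index(center)  # rank of the conditioning sample (ties: lowest)
    return sorted_w[selection[center_rank]]
```

Higher-order RCRS filters condition on the joint ranks of several samples, enlarging the selection table and the filter's flexibility at the cost of optimization complexity.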

    Digital Image Processing

    In recent years, digital images and digital image processing have become part of everyday life. This growth has been fueled primarily by advances in digital computers and the advent and growth of the Internet. Furthermore, commercially available digital cameras, scanners, and other equipment for acquiring, storing, and displaying digital imagery have become very inexpensive and increasingly powerful. An excellent treatment of digital images and digital image processing can be found in Ref. [1]. A digital image is simply a two-dimensional array of finite-precision numerical values called picture elements (or pixels). Thus a digital image is a spatially discrete (or discrete-space) signal. In visible grayscale images, for example, each pixel represents the intensity of a corresponding region in the scene. The grayscale values must be quantized into a finite-precision format. Typical bit depths include 8 bits (256 gray levels), 12 bits (4096 gray levels), and 16 bits (65,536 gray levels). Color visible images are most frequently represented by tristimulus values: the quantities of red, green, and blue light required, in the additive color system, to produce the desired color. Thus a so-called “RGB” color image can be thought of as a set of three “grayscale” images, the first representing the red component, the second the green, and the third the blue. Digital images can also be nonvisible in nature, meaning that the physical quantity represented by the pixel values is something other than visible light intensity or color. Examples include the radar cross-section of an object, temperature profiles (infrared imaging), X-ray images, gravitational fields, etc. In general, any two-dimensional array of information can be the basis for a digital image. As with any digital data, the advantage of this representation is the ability to manipulate the pixel values using a digital computer or digital hardware, which offers great power and flexibility. Furthermore, digital images can be stored and transmitted far more reliably than their analog counterparts. Error-protection coding of digital imagery, for example, allows for virtually error-free transmission.
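The quantization and three-plane RGB representation described above can be illustrated briefly (the function name is my own):

```python
import numpy as np

def quantize(image, bits):
    """Quantize a [0, 1] image to the finite-precision levels of a digital image."""
    levels = 2 ** bits  # 8 bits -> 256 gray levels, 12 bits -> 4096, etc.
    return np.round(image * (levels - 1)).astype(np.uint16)

# An RGB color image is three "grayscale" planes stacked along a third axis.
red = np.full((2, 2), 1.0)
green = np.zeros((2, 2))
blue = np.zeros((2, 2))
rgb = np.stack([red, green, blue], axis=-1)  # shape (2, 2, 3): a pure red image
```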

    Recursive Non-Local Means Filter for Video Denoising

    In this paper, we propose a computationally efficient algorithm for video denoising that exploits temporal and spatial redundancy. The proposed method is based on non-local means (NLM). NLM methods have been applied successfully in various image denoising applications. In the single-frame NLM method, each output pixel is formed as a weighted sum of the center pixels of neighboring patches within a given search window. The weights are based on the patch intensity vector distances. The process requires computing vector distances for all of the patches in the search window. Direct extension of this method from 2D to 3D for video processing can be computationally demanding: the size of a 3D search window is the size of the 2D search window multiplied by the number of frames being used to form the output. Exploiting a large number of frames in this manner can be prohibitive for real-time video processing. Here, we propose a novel recursive NLM (RNLM) algorithm for video processing. Our RNLM method takes advantage of recursion for computational savings, compared with the direct 3D NLM. However, like the 3D NLM, our method is still able to exploit both spatial and temporal redundancy for improved performance, compared with 2D NLM. In our approach, the first frame is processed with single-frame NLM. Subsequent frames are estimated using a weighted sum of pixels from the current frame and a pixel from the previous frame's estimate. Only the single best-matching patch from the previous estimate is incorporated into the current estimate. Several experimental results are presented here to demonstrate the efficacy of our proposed method in terms of quantitative and subjective image quality.
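The per-pixel recursive temporal update described above might be sketched as follows. The exponential weight model and parameter names are illustrative assumptions, not the authors' exact formulation.

```python
from math import exp

def rnlm_update(nlm_current, prev_estimate, best_match_dist2, h=10.0, w_self=1.0):
    """One recursive NLM temporal update for a single pixel (sketch).

    nlm_current      : spatially denoised value of the pixel in the current frame
    prev_estimate    : center pixel of the single best-matching patch in the
                       previous frame's estimate
    best_match_dist2 : squared patch distance of that best match
    h                : filtering parameter controlling weight falloff
    """
    w_prev = exp(-best_match_dist2 / h ** 2)  # good match -> weight near 1
    # Weighted combination of the current-frame result and the recursive estimate
    return (w_self * nlm_current + w_prev * prev_estimate) / (w_self + w_prev)
```

A perfect temporal match pulls the output toward the previous estimate (strong temporal smoothing), while a poor match falls back to the single-frame NLM result.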